On the Convergence of Markovian Stochastic Algorithms with Rapidly Decreasing Ergodicity Rates

Author

  • LAURENT YOUNES

Abstract

We analyse the convergence of stochastic algorithms with Markovian noise when the ergodicity of the Markov chain governing the noise rapidly decreases as the control parameter tends to infinity. In such a case, there may be a positive probability of divergence of the algorithm in its classic Robbins-Monro form. We provide modifications of the algorithm which ensure convergence. Moreover, we analyse the asymptotic behaviour of these algorithms and state a diffusion approximation theorem.

1. Introduction

Stochastic algorithms of Robbins-Monro type with Markovian noise form a category of processes for which almost sure convergence cannot be obtained in general. The reason is that the ergodicity of the Markov chain governing the noise may decrease as the control parameter tends to infinity, trapping the algorithm in an exploding regime. In this paper, we rigorously study a natural strategy in which more time is spent estimating the variations of the control parameter for large values of this parameter. In particular, we give conditions under which this strategy converges.
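To fix ideas, the Robbins-Monro recursion with Markovian noise takes the form θ_{n+1} = θ_n − γ_n H(θ_n, X_{n+1}), where (X_n) is a Markov chain whose transition kernel depends on θ. The following is a minimal illustrative sketch, not the paper's construction: the mean field, the AR(1) noise chain, and the step-size schedule are all toy choices made up for this example (in the paper's setting the chain's mixing degrades as |θ| grows, which is the source of the difficulty).

```python
import numpy as np

rng = np.random.default_rng(0)

def markov_step(x, theta):
    """One step of a toy AR(1) Markov chain driving the noise.

    Here the autocorrelation is fixed; in the regime studied in the paper
    it would deteriorate as the control parameter theta grows.
    """
    rho = 0.5
    return rho * x + rng.normal()

def H(theta, x):
    """Noisy mean-field term: E[H(theta, X)] = theta - 1, so the root is theta* = 1."""
    return (theta - 1.0) + x

theta, x = 5.0, 0.0
for n in range(1, 20001):
    gamma = 1.0 / n              # classic schedule: sum(gamma) = inf, sum(gamma^2) < inf
    x = markov_step(x, theta)
    theta = theta - gamma * H(theta, x)

# theta is now close to the root theta* = 1
```

In this toy example the chain mixes uniformly well, so the plain recursion converges; the paper's modification addresses the case where it need not.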


Similar articles

On $L_1$-weak ergodicity of nonhomogeneous continuous-time Markov processes

In the present paper we investigate the $L_1$-weak ergodicity of nonhomogeneous continuous-time Markov processes with general state spaces. We provide a necessary and sufficient condition for such processes to satisfy the $L_1$-weak ergodicity. Moreover, we apply the obtained results to establish $L_1$-weak ergodicity of quadratic stochastic processes.


Convergence of Adaptive Markov Chain Monte Carlo Algorithms

In the thesis, we study ergodicity of adaptive Markov Chain Monte Carlo (MCMC) methods based on two conditions (Diminishing Adaptation and Containment, which together imply ergodicity), explain the advantages of adaptive MCMC, and apply the theoretical results to some applications. First we show several facts: 1. Diminishing Adaptation alone may not guarantee ergodicity; 2. Containment is not ne...


Almost sure exponential stability of stochastic reaction diffusion systems with Markovian jump

Stochastic reaction diffusion systems may suffer sudden shocks; to capture this phenomenon, we use Markovian jumps to model stochastic reaction diffusion systems. In this paper, we are interested in almost sure exponential stability of stochastic reaction diffusion systems with Markovian jumps. Under some reasonable conditions, we show that the trivial solution of stocha...


Stability of Markovian Processes III: Foster-Lyapunov Criteria for Continuous-Time Processes

In Part I we developed stability concepts for discrete chains, together with Foster-Lyapunov criteria for them to hold. Part II was devoted to developing related stability concepts for continuous-time processes. In this paper we develop criteria for these forms of stability for continuous-parameter Markovian processes on general state spaces, based on Foster-Lyapunov inequalities for the extend...


Global Convergence of Langevin Dynamics Based Algorithms for Nonconvex Optimization

We present a unified framework to analyze the global convergence of Langevin dynamics based algorithms for nonconvex finite-sum optimization with n component functions. At the core of our analysis is a direct analysis of the ergodicity of the numerical approximations to Langevin dynamics, which leads to faster convergence rates. Specifically, we show that gradient Langevin dynamics (GLD) and st...
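The gradient Langevin dynamics (GLD) mentioned above iterates a gradient step perturbed by scaled Gaussian noise, x_{k+1} = x_k − η∇f(x_k) + √(2η/β) ξ_k, so that the iterates approximately sample the Gibbs measure ∝ exp(−βf). A minimal sketch on a one-dimensional quadratic (the objective, step size η, and inverse temperature β below are illustrative choices, not taken from the cited work):

```python
import numpy as np

rng = np.random.default_rng(1)

def grad_f(x):
    """Gradient of the toy objective f(x) = (x - 3)^2, global minimum at x = 3."""
    return 2.0 * (x - 3.0)

eta, beta = 0.01, 50.0           # step size and inverse temperature (illustrative)
x = 0.0
samples = []
for k in range(20000):
    # GLD update: gradient step plus sqrt(2*eta/beta)-scaled Gaussian noise
    x = x - eta * grad_f(x) + np.sqrt(2.0 * eta / beta) * rng.normal()
    samples.append(x)

# after burn-in, samples concentrate around the minimizer x = 3
mean_tail = np.mean(samples[10000:])
```

With β large, the stationary distribution concentrates near the global minimizer, which is the mechanism behind the global convergence guarantees the abstract refers to.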



Journal title:

Volume   Issue

Pages  -

Publication date 1998